
    Distilling Information Reliability and Source Trustworthiness from Digital Traces

    Online knowledge repositories typically rely on their users or dedicated editors to evaluate the reliability of their content. These evaluations can be viewed as noisy measurements of both information reliability and information source trustworthiness. Can we leverage these noisy, often biased evaluations to distill a robust, unbiased and interpretable measure of both notions? In this paper, we argue that the temporal traces left by these noisy evaluations give cues on the reliability of the information and the trustworthiness of the sources. We then propose a temporal point process modeling framework that links these temporal traces to robust, unbiased and interpretable notions of information reliability and source trustworthiness. Furthermore, we develop an efficient convex optimization procedure to learn the parameters of the model from historical traces. Experiments on real-world data gathered from Wikipedia and Stack Overflow show that our modeling framework accurately predicts evaluation events, provides an interpretable measure of information reliability and source trustworthiness, and yields interesting insights about real-world events. Comment: Accepted at the 26th World Wide Web Conference (WWW-17).
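    The abstract does not spell out the model, so the sketch below only illustrates the general temporal-point-process idea under assumptions of my own: the evaluation timestamps of a single item are fitted with a one-dimensional Hawkes process (exponential kernel) by maximum likelihood. The function names, the toy trace, and the choice of a Hawkes intensity are illustrative and are not the paper's actual formulation or its convex learning procedure.

```python
# Hedged sketch: maximum-likelihood fit of a one-dimensional Hawkes process
# with an exponential kernel to a sequence of evaluation timestamps.
# Illustrates the temporal-point-process idea only; the paper's model,
# parameters, and optimization procedure are not reproduced here.
import numpy as np
from scipy.optimize import minimize

def hawkes_neg_loglik(params, times, horizon):
    """Negative log-likelihood of lambda(t) = mu + alpha * sum_j exp(-omega * (t - t_j))."""
    mu, alpha, omega = np.exp(params)          # log-parameterization keeps rates positive
    a = 0.0                                    # recursive term A_i = sum_{j<i} exp(-omega (t_i - t_j))
    log_intensities = 0.0
    for i, t in enumerate(times):
        if i > 0:
            a = np.exp(-omega * (t - times[i - 1])) * (1.0 + a)
        log_intensities += np.log(mu + alpha * a)
    # Compensator: integral of lambda over [0, horizon]
    compensator = mu * horizon + (alpha / omega) * np.sum(1.0 - np.exp(-omega * (horizon - times)))
    return compensator - log_intensities

# Toy evaluation trace (e.g. timestamps of up-votes on one answer), in days.
trace = np.array([0.5, 0.9, 1.4, 3.0, 3.1, 3.2, 7.8])
result = minimize(hawkes_neg_loglik, x0=np.log([0.5, 0.5, 1.0]),
                  args=(trace, 10.0), method="Nelder-Mead")
mu_hat, alpha_hat, omega_hat = np.exp(result.x)
print(f"base rate={mu_hat:.3f}, excitation={alpha_hat:.3f}, decay={omega_hat:.3f}")
```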

    Solvent effect on protonation of TPPS in water-DMF mixtures

    The protonation of 5,10,15,20-tetrakis(4-sulfonatophenyl)porphyrin (TPPS) was investigated in aqueous solutions of N,N-dimethylformamide (DMF) at 25 °C and 0.1 mol dm⁻³ sodium perchlorate. The solvent effect on the protonation constants was examined using the linear solvation energy relationship concept. The values of logK1, logK2 and logKt were correlated with the macroscopic (dielectric constant) and microscopic Kamlet-Taft parameters (α, β and π*) of the binary mixtures. The solvent effects were analyzed in terms of the Kamlet-Abboud-Taft (KAT) model. Multiple linear regression was used to find the contributions of the microscopic parameters α (hydrogen-bond acidity), π* (dipolarity/polarizability) and β (hydrogen-bond basicity); α and β were found to be the predominant descriptors. A relationship with the reciprocal of the dielectric constant, based on Born's model, was also obtained, showing the significance of specific solute-solvent interactions. Hydrogen-bonding interactions between the solute and the solvent components are therefore mainly responsible for the change in the protonation constants of TPPS in water-N,N-dimethylformamide binary mixtures. KEY WORDS: Protonation, TPPS, Solvent effects, Aqueous mixture, DMF. Bull. Chem. Soc. Ethiop. 2016, 30(3), 457-464. DOI: http://dx.doi.org/10.4314/bcse.v30i3.1
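    As a hedged illustration of the KAT analysis described above, the sketch below runs an ordinary least-squares fit of logK against the solvatochromic parameters, logK = A0 + a·α + b·β + s·π*. All numerical values are placeholders invented for the example, not the published data for the water-DMF mixtures.

```python
# Hedged sketch of the KAT multiple linear regression described in the abstract:
#   log K = A0 + a*alpha + b*beta + s*pi_star
# The solvent-parameter and log K values below are placeholders, not the
# published data for the water-DMF mixtures.
import numpy as np

alpha   = np.array([1.17, 1.05, 0.95, 0.85, 0.76])   # hydrogen-bond acidity
beta    = np.array([0.47, 0.52, 0.58, 0.63, 0.69])   # hydrogen-bond basicity
pi_star = np.array([1.09, 1.05, 1.02, 0.99, 0.96])   # dipolarity/polarizability
log_k   = np.array([4.80, 4.55, 4.32, 4.10, 3.91])   # protonation constants (illustrative)

# Design matrix with an intercept column for A0.
X = np.column_stack([np.ones_like(alpha), alpha, beta, pi_star])
coef, residuals, rank, _ = np.linalg.lstsq(X, log_k, rcond=None)
a0, a, b, s = coef
print(f"A0={a0:.2f}  a={a:.2f}  b={b:.2f}  s={s:.2f}")
```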

    Back to the Source: an Online Approach for Sensor Placement and Source Localization

    Source localization, the act of finding the originator of a disease or rumor in a network, has become an important problem in sociology and epidemiology. The localization is done using the infection state and time of infection of a few designated sensor nodes; however, maintaining sensors can be very costly in practice. We propose the first online approach to source localization: we deploy a priori only a small number of sensors (which reveal if they are reached by an infection) and then iteratively choose the best location to place new sensors in order to localize the source. This approach allows for source localization with a very small number of sensors; moreover, the source can be found while the epidemic is still ongoing. Our method applies to a general network topology and performs well even with random transmission delays.
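    The placement criterion is not given in the abstract, so the sketch below only illustrates the general online loop under simplifying assumptions of my own: deterministic unit transmission delays, shortest-path propagation, and a heuristic that places each new sensor where the surviving candidate sources disagree most. It is not the authors' algorithm.

```python
# Hedged sketch of an online sensor-placement loop for source localization.
# Assumptions (not from the paper): deterministic unit delays, shortest-path
# propagation, and a simple disambiguation heuristic for sensor placement.
import networkx as nx
import numpy as np

def candidate_sources(graph, observations):
    """Nodes whose shortest-path delays are consistent with the observed infection times."""
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    candidates = []
    for s in graph.nodes:
        offsets = [observations[v] - dist[s][v] for v in observations]
        if max(offsets) - min(offsets) == 0:   # a single start time explains every sensor
            candidates.append(s)
    return candidates

def next_sensor(graph, candidates, placed):
    """Greedy choice: the unplaced node whose distance to the candidate sources varies the most."""
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    unplaced = [v for v in graph.nodes if v not in placed]
    return max(unplaced, key=lambda v: np.var([dist[s][v] for s in candidates]))

# Toy run on a small connected graph with a hidden source and two initial sensors.
g = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=1)
source = 7
times = dict(nx.single_source_shortest_path_length(g, source))  # infection time of every node
obs = {v: times[v] for v in (0, 29)}                             # initial sensors
while len(candidate_sources(g, obs)) > 1 and len(obs) < g.number_of_nodes():
    cands = candidate_sources(g, obs)
    v = next_sensor(g, cands, obs)
    obs[v] = times[v]            # "deploy" the new sensor and read its infection time
print("localized source:", candidate_sources(g, obs), "true source:", source)
```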

    Smart Broadcasting: Do you want to be seen?

    Many users in online social networks constantly try to gain attention from their followers by broadcasting posts to them. These broadcasters are likely to gain greater attention if their posts remain visible for a longer period of time among their followers' most recent feeds. When, then, should they post? In this paper, we study the problem of smart broadcasting using the framework of temporal point processes, where we model users' feeds and posts as discrete events occurring in continuous time. Based on this continuous-time model, choosing a broadcasting strategy for a user becomes a problem of designing the conditional intensity of her posting events. We derive a novel formula which links this conditional intensity with the visibility of the user in her followers' feeds. Furthermore, by exploiting this formula, we develop an efficient convex optimization framework for the when-to-post problem. Our method can find broadcasting strategies that reach a desired visibility level with provable guarantees. We experiment with data gathered from Twitter and show that our framework consistently makes broadcasters' posts more visible than alternatives. Comment: To appear in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), San Francisco (CA, USA), 2016.
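    The paper's visibility formula and convex optimization are not reproduced here; as a hedged illustration of the visibility notion itself, the sketch below simulates one follower's feed in which the most recent post occupies the top slot, and compares the Monte-Carlo fraction of time the broadcaster stays on top with the stationary value u/(u+λ) that holds when both her posts and the competing posts arrive as constant-rate Poisson processes. The constant-rate simplification is mine, not the paper's general setting.

```python
# Hedged sketch: visibility of a broadcaster in one follower's feed when the
# most recent post occupies the top slot. Both the broadcaster (rate u) and
# the competing posts (rate lam) are simulated as homogeneous Poisson
# processes; this simplification is illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

def poisson_times(rate, horizon):
    """Event times of a homogeneous Poisson process on [0, horizon]."""
    n = rng.poisson(rate * horizon)
    return np.sort(rng.uniform(0.0, horizon, size=n))

u, lam, horizon = 0.5, 2.0, 10_000.0
own = poisson_times(u, horizon)       # the broadcaster's posts
others = poisson_times(lam, horizon)  # everyone else's posts in the same feed

# Merge both streams; the broadcaster is "on top" from one of her posts until
# the next event of either kind, so sum the gaps that follow her posts.
events = np.concatenate([own, others])
labels = np.concatenate([np.ones(len(own), bool), np.zeros(len(others), bool)])
order = np.argsort(events)
events, labels = events[order], labels[order]
gaps = np.diff(np.append(events, horizon))   # time until the next event (or the horizon)
time_on_top = gaps[labels].sum()

print(f"simulated visibility: {time_on_top / horizon:.3f}")
print(f"stationary value u/(u+lam): {u / (u + lam):.3f}")
```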
